This HTML report was generated by: default default (user defined) using the ValidR online Shiny application with R version 4.2.2 (2022-10-31) statistical software. The quality of the report rendering (e.g. math symbols) may depend on the browser and requires JavaScript to be enabled.

All figures are interactive and can be filtered, zoomed, and downloaded (publication-quality SVG format) using the menu bar in the upper right corner or the legend. Hovering over lines and points on the graph displays the available information.

Incorrect use of the application or templates can lead to errors.

The results are provided “as is” and should be reviewed and interpreted by the user. Neither the designers nor the providers of the application can be held responsible for any errors or adverse consequences resulting from the use of the app or this report.

The uploaded data are reported in section 4 at the end of this report.

1 Methods

Validation of assay methods is based on the use of calibration and validation standards prepared by accurately weighing predetermined quantities of analytes as required. The experiments are generally carried out over multiple series (usually on different days), at multiple concentration levels, with multiple replicates per level and per series.

To complete this report, it is recommended to annex the exact drug formula, with the reference of each of the raw materials used, as well as a complete description of the pre-analytical and analytical methods used.

1.1 Calibration curves

The parameters of the calibration curves were estimated by the ordinary least squares (OLS) method using the R stats::lm() function.

When possible, various linear regressions were tested:

  • Linear through 0 function (Linear 0)
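As an illustration only (the report itself uses R's stats::lm(), where a through-origin fit is written lm(SIGNAL ~ 0 + CONC)), the no-intercept OLS fit and the back-calculation step can be sketched in Python; the signal values below are made-up numbers on roughly the scale of the raw data in section 4:

```python
import numpy as np

# Illustrative calibration pairs (concentration, signal); made-up values.
conc = np.array([0.0005, 0.0015, 0.02, 0.2])
signal = np.array([3600.0, 10700.0, 143000.0, 1432000.0])

# Linear through 0: signal = slope * conc. With no intercept, the OLS
# estimate has the closed form sum(x * y) / sum(x * x).
slope = np.sum(conc * signal) / np.sum(conc * conc)

def back_calc(s, slope):
    """Back-calculation (inverse prediction): concentration from a signal."""
    return s / slope

x_calc = back_calc(signal, slope)  # back-calculated concentrations
```

Back-calculating the calibration signals themselves, as above, is what produces the \(x_{i,j,k, calc}\) values used throughout section 1.2.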

1.2 Estimation of trueness and precision

1.2.1 Models

Considering \(i \in [1;p]\) (series index), \(j \in [1;m]\) (concentration-level index), \(k\in [1;n]\) (repetition index) and \(x_{i,j,k, calc}\) the \(k\)th back-calculated concentration of the \(i\)th series at the \(j\)th level (using calibration standards):

\[ x_{i,j,k, calc} = \mu_j + \alpha_{i,j} + \epsilon_{i,j,k} \]

With : \(\begin{cases} \mu_j &: \text{the back calculated mean of the $j$ level} \\ \alpha_{i,j} &: \text{at $j$ level, the difference between the $i$th series average and the $\mu_j$}\\ \epsilon_{i,j,k} &: \text{the experimental error} \end{cases}\)

\(\begin{cases}\alpha_{i,j} &\sim \mathcal{N}(0,\sigma^2_{B,j}) \; \text{with $\sigma^2_{B,j}$ representing the interseries variances} \\ \epsilon_{i,j,k} &\sim \mathcal{N}(0, \sigma^2_{W,j}) \; \text{with $\sigma^2_{W,j}$ representing the intraseries variances}\end{cases}\)

1.2.2 Trueness

Trueness expresses the closeness of agreement between the mean value obtained from a series of test results and an accepted reference value. The trueness gives an indication of the systematic errors. It was calculated at each concentration level as the difference between the calculated concentrations mean \(\hat{\mu}_j\) and the introduced concentrations (arithmetic) mean \(\overline{x}_j\).

The bias (for the \(j\)th level) was expressed as: \(\text{Bias}_j =\hat{\mu}_j-\overline{x}_j\)

And the relative bias as: \(\text{Bias(%)}_j = \frac{\hat{\mu}_j-\overline{x}_j}{\overline{x}_j}\times 100\)

The recovery (for the \(j\)th level) was expressed as: \(\text{Recovery(%)}_j = \frac{\hat{\mu}_j}{\overline{x}_j}\times 100\)
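The three trueness estimators above can be sketched as follows (a minimal Python illustration with made-up numbers; the report itself computes them in R):

```python
import numpy as np

def trueness(x_intro, x_calc):
    """Bias, relative bias (%) and recovery (%) at one concentration level."""
    x_bar = np.mean(x_intro)    # mean introduced concentration
    mu_hat = np.mean(x_calc)    # mean back-calculated concentration
    bias = mu_hat - x_bar
    return bias, 100.0 * bias / x_bar, 100.0 * mu_hat / x_bar

# Made-up level: introduced 0.02, back-calculated values slightly low.
bias, bias_pct, recovery_pct = trueness([0.02, 0.02, 0.02],
                                        [0.0192, 0.0196, 0.0188])
```

With these numbers the mean back-calculated concentration is 0.0192, so the relative bias is −4 % and the recovery 96 %, mirroring the pattern of Table 3.2.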

1.2.3 Fidelity & intermediate precision

Fidelity expresses the closeness of agreement between a series of measurements obtained from multiple samplings of the same homogeneous sample under the prescribed conditions. It provides information on random errors and is assessed at two levels: repeatability and intermediate precision.

Intra- and interseries variances were estimated at each level \(j\) using the restricted maximum likelihood method:

\(\begin{align} \hat{\mu}_j &= \frac{1}{\sum^p_{i=1}n_{i,j}}\cdot\sum^p_{i=1}\sum^{n_{i,j}}_{k=1} x_{i,j,k, calc} \\ \text{MSM}_j &= \frac{1}{p-1} \cdot \sum^p_{i=1}n_{i,j}\cdot \left(\overline{x}_{i,j, calc}-\hat{\mu}_j\right)^2 \\ \text{MSE}_j &=\frac{1}{\sum^p_{i=1}n_{i,j}-p}\cdot\sum^p_{i=1}\sum^{n_{i,j}}_{k=1} \left(x_{i,j,k, calc}-\overline{x}_{i,j, calc}\right)^2 \end{align}\)

where \(\overline{x}_{i,j, calc}\) denotes the mean of the back-calculated concentrations of series \(i\) at level \(j\).

\(\text{if MSE$_j$ < MSM$_j$} \begin{cases} \hat{\sigma}^2_{W,j} &=\text{MSE}_j \\ \hat{\sigma}^2_{B,j} &= \frac{\text{MSM}_j-\text{MSE}_j}{n} \end{cases}\)

\(\text{else} \begin{cases} \hat{\sigma}^2_{W,j} &=\frac{1}{pn - 1}\cdot\sum^p_{i=1}\sum^{n}_{k=1} \left(x_{i,j,k, calc}-\hat{\mu}_j\right)^2 \\ \hat{\sigma}^2_{B,j} &= 0 \end{cases}\)

Intermediate precision was then calculated: \(\hat{\sigma}^2_{IP,j} = \hat{\sigma}^2_{W,j} + \hat{\sigma}^2_{B,j}\)

Each corresponding coefficient of variation was determined as: \(\text{CV(%)}_j = \frac{\hat{\sigma}_j}{\hat{\mu}_j}\times 100\)
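The decomposition above, including the truncation of a negative between-series estimate to zero, can be sketched in Python for a balanced design (equal replicates \(n\) per series; made-up numbers, not the ValidR implementation):

```python
import numpy as np

def precision(levels):
    """Variance components at one concentration level (balanced design).

    levels: one array of back-calculated concentrations per series,
    each with the same number n of replicates.
    Returns (sigma2_W, sigma2_B, CV of intermediate precision in %).
    """
    levels = [np.asarray(s, dtype=float) for s in levels]
    p, n = len(levels), len(levels[0])
    x = np.concatenate(levels)
    mu_hat = x.mean()                                   # grand mean
    means = np.array([s.mean() for s in levels])        # per-series means
    msm = n * np.sum((means - mu_hat) ** 2) / (p - 1)   # between-series MS
    mse = sum(np.sum((s - m) ** 2)
              for s, m in zip(levels, means)) / (p * n - p)
    if mse < msm:
        s2_w, s2_b = mse, (msm - mse) / n
    else:  # negative between-series estimate truncated to zero
        s2_w, s2_b = np.sum((x - mu_hat) ** 2) / (p * n - 1), 0.0
    cv_ip = 100.0 * np.sqrt(s2_w + s2_b) / mu_hat       # intermediate precision
    return s2_w, s2_b, cv_ip
```

For example, three series with strongly shifted means yield a large between-series component, while identical series means fall into the truncation branch with \(\hat{\sigma}^2_{B,j}=0\).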

1.2.4 Tolerance interval and accuracy profiles

The tolerance interval is a statistical interval within which, with a given confidence level, a specified proportion (\(\beta\)) of a sampled population falls. Constructing tolerance intervals from standard solutions therefore makes it possible to predict, with a given confidence level, where a given proportion of the dosages will fall.

The tolerance interval was computed at each concentration level using validation standards, as follows:

  • \(\hat{\mu}_j \pm \mathcal{Q}_t\left(\upsilon,\frac{1 + \beta}{2}\right)\cdot\sqrt{1 + \frac{1}{pn\cdot B^2_j}}\cdot\hat{\sigma}_{IP,j}\)

  • \(\text{Bias(%)}_j \pm \mathcal{Q}_t\left(\upsilon,\frac{1 + \beta}{2}\right) \cdot \sqrt{1 + \frac{1}{pn\cdot B^2_j}} \cdot \hat{\text{CV}}_{IP,j}\)

\(\text{with} \begin{cases}\begin{align} R_j &= \frac{\sigma^2_{B,j}}{\sigma^2_{W,j}} \\ B_j &= \sqrt{\frac{R_j +1}{n\cdot R_j + 1}} \\ \upsilon &= \frac{(R_j+1)^2}{\frac{(R_j + (1/n))^2}{p - 1} + \frac{1 - 1/n}{pn} } \end{align}\end{cases}\)

\(\mathcal{Q}_t(\upsilon,\frac{1 + \beta}{2})\) corresponds to the \(\frac{1+\beta}{2}\) quantile of the Student \(t\) distribution with \(\upsilon\) degrees of freedom.

  • The \(\beta\) value used for the calculation of the \(\beta\)-expectation tolerance interval was set to: 0.8 (user defined).
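The interval formula above can be sketched as follows (a dependency-free Python illustration for a balanced design; the Student-t quantile function is passed in by the caller, e.g. a wrapper around scipy.stats.t.ppf, since it is not in the Python standard library):

```python
import math

def beta_ti(mu_hat, s2_w, s2_b, p, n, t_quantile_fn, beta=0.8):
    """Beta-expectation tolerance interval (absolute scale, balanced design).

    t_quantile_fn(df, prob) must return the Student t quantile for the
    given degrees of freedom and probability; it is a parameter so this
    sketch stays dependency-free.
    """
    R = s2_b / s2_w if s2_w > 0 else 0.0
    B = math.sqrt((R + 1.0) / (n * R + 1.0))
    nu = (R + 1.0) ** 2 / ((R + 1.0 / n) ** 2 / (p - 1)
                           + (1.0 - 1.0 / n) / (p * n))
    s_ip = math.sqrt(s2_w + s2_b)        # intermediate precision SD
    half = (t_quantile_fn(nu, (1.0 + beta) / 2.0)
            * math.sqrt(1.0 + 1.0 / (p * n * B * B)) * s_ip)
    return mu_hat - half, mu_hat + half
```

The same half-width divided by the introduced concentration and multiplied by 100 gives the relative interval plotted on the accuracy profile.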

The accuracy profiles (─) were plotted by joining the tolerance intervals obtained for each of the levels tested. A method is validated over the full range of measurements when the accuracy profile (─) is fully included between the upper and lower bounds of the acceptability limits \([-\lambda;+\lambda]\) (┄) set a priori. The method can only be used over the concentration range for which the accuracy profile is entirely within the acceptability limits \([-\lambda;+\lambda]\).

  • The acceptability value used was set to: 10 % (user defined).

1.2.5 Determination of quantification limits.

When the accuracy profile (─) is not fully included within the acceptability limits \([-\lambda;+\lambda]\) (┄), a limit of quantification can be determined at the intersection of the accuracy profile with the upper (\(+\lambda\)) or lower (\(-\lambda\)) acceptability limit. This can happen at low or high concentrations, yielding a lower or upper LOQ respectively.

The coordinates of the intersections between the accuracy profiles and the acceptability limits were calculated and plotted on the tolerance profiles (✴).

  • The lower LOQ is the lowest amount of analyte in a sample that can be quantitatively determined with suitable precision and accuracy. At low levels, it is determined as the last intersection point between the accuracy profile and the acceptance limits before the accuracy profile becomes fully included within them. If the accuracy profile is already fully inside the acceptance limits at the first level tested, the lower LOQ is that first level.

  • The upper LOQ is the highest amount of analyte in a sample that can be quantitatively determined with suitable precision and accuracy. At high levels, it is determined as the first intersection point between the accuracy profile and the acceptance limits before the accuracy profile partly or fully exceeds them. If the accuracy profile remains fully inside the acceptance limits from the lower LOQ up to the last level tested, the upper LOQ is that last level.
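Because the profile is a broken line joining the levels tested, each intersection can be located by linear interpolation between adjacent levels. A minimal Python sketch (illustrative only, not the ValidR implementation):

```python
def loq_crossing(conc, ti_bound_pct, limit_pct):
    """Concentrations where a tolerance-interval bound (relative %) crosses
    an acceptance limit, by linear interpolation between adjacent levels.

    conc and ti_bound_pct are parallel sequences sorted by concentration.
    """
    crossings = []
    for i in range(len(conc) - 1):
        y0 = ti_bound_pct[i] - limit_pct
        y1 = ti_bound_pct[i + 1] - limit_pct
        if y0 == 0.0:
            crossings.append(conc[i])        # bound exactly on the limit
        elif y0 * y1 < 0.0:                  # sign change: a crossing occurs
            t = y0 / (y0 - y1)
            crossings.append(conc[i] + t * (conc[i + 1] - conc[i]))
    return crossings
```

Running it on the lower tolerance bound against \(-\lambda\) gives candidate lower-LOQ coordinates; on the upper bound against \(+\lambda\), candidate upper-LOQ coordinates.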

2 Analysis of response function (calibration curves)

2.1 Methods and Data

Response functions \(signal=f(concentration)\) were analysed using the stats::lm() function in R, using the calibration data provided in Table 4.1 and Figure 4.1 (see section 4).

Analysis was performed using:

  • Linear through 0 function (Linear 0)

2.2 Response functions obtained

2.2.1 Regression analysis

The interactive Table 2.1 shows the values obtained with the regressions:

Table 2.1: Results of linear regressions performed
Methods        Series  Intercept  Slope    AIC       R²
Linear 0 (LM)  1       0          7559372  186.8493  0.9991447
Linear 0 (LM)  2       0          7149831  159.2975  0.9999694
Linear 0 (LM)  3       0          7077097  177.1051  0.9997112

2.2.2 Residuals for each calibration curve at each level

The residuals obtained are shown in the interactive Figure 2.1.

Figure 2.1: Relative bias calculated from regression

3 Validation

3.1 Using a linear forced through 0 calibration curve

3.1.1 Trueness and precision obtained

Trueness and precision are shown in Tables 3.1 and 3.2.

Table 3.1: Trueness and precision estimators and limits ( LIN_0 )
Introduced concentrations (default)  Mean calculated concentrations (default)  Bias (default)  Repeatability SD (default)  Between-series SD (default)  Intermediate precision SD (default)  Low limit of tolerance (default)  High limit of tolerance (default)
0.0005 0.0003176 -0.0001824 0.0001376 0.0e+00 0.0001376 0.0001219 0.0005133
0.0015 0.0012839 -0.0002161 0.0002636 0.0e+00 0.0002636 0.0009091 0.0016587
0.0200 0.0192220 -0.0007780 0.0014294 8.0e-07 0.0016996 0.0166384 0.0218056
0.2000 0.1999679 -0.0000321 0.0114542 7.5e-06 0.0117785 0.1830626 0.2168732
Table 3.2: Trueness and precision estimators and limits ( LIN_0 )
Introduced concentrations (default)  Mean calculated concentrations (default)  Bias (%)  Recovery (%)  CV repeatability (%)  CV intermediate precision (%)  Low limit of tolerance (%)  High limit of tolerance (%)  Results
0.0005 0.0003176 -36.4882832 63.51172 27.525332 27.525332 -75.627610 2.651044 FAIL
0.0015 0.0012839 -14.4058986 85.59410 17.570893 17.570893 -39.390627 10.578829 FAIL
0.0200 0.0192220 -3.8899655 96.11003 7.146753 8.497997 -16.808072 9.028141 FAIL
0.2000 0.1999679 -0.0160614 99.98394 5.727122 5.889251 -8.468712 8.436589 PASS

3.1.2 Accuracy profile

The accuracy profile is shown in Figure 3.1. The 𝛽-tolerance interval (─) should be entirely within the acceptance limits (┄). The coordinates of the intersection points between the tolerance interval and the acceptability limits, when present, are marked with a red star (✴).

Figure 3.1: Accuracy profiles (red dashed line: acceptance limits, blue lines: 𝛽-tolerance intervals).

3.1.3 Linearity Profile

The linearity profile is shown in Figure 3.2. Check that the black line (linear model) is superimposed on the red dashed identity line.

Figure 3.2: Linearity profiles (red dashed line: identity line, black lines: linear regression lines, blue lines: 𝛽-tolerance intervals).

3.1.4 Linear regression

The p-value for the intercept is > 0.05, which is in favor of a non-significant intercept (PASS)

The p-value for the slope is < 0.05, which is in favor of a significant slope (PASS)

Table 3.3: Results of the linear regression.
Estimate CI (lower) CI (upper) Std. Error t value Pr(>|t|)
(Intercept) -0.0003905 -0.0024046 0.0016237 0.0010006 -0.3902287 0.698
x 1.0015911 0.9815504 1.0216317 0.0099561 100.6004122 <0.001 ***
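The regression behind Table 3.3 fits the back-calculated concentrations against the introduced ones and checks intercept ≈ 0 and slope ≈ 1. A dependency-light numpy sketch (made-up data; the report used R's lm(), and confidence intervals would additionally require Student-t quantiles):

```python
import numpy as np

def slope_intercept_check(x_intro, x_calc):
    """OLS of back-calculated vs introduced concentrations.

    Returns (intercept, slope, se_intercept, se_slope); for a linear
    method one expects intercept ~ 0 and slope ~ 1.
    """
    x_intro = np.asarray(x_intro, dtype=float)
    x_calc = np.asarray(x_calc, dtype=float)
    X = np.column_stack([np.ones(len(x_intro)), x_intro])
    beta, *_ = np.linalg.lstsq(X, x_calc, rcond=None)
    resid = x_calc - X @ beta
    s2 = np.sum(resid ** 2) / (len(x_intro) - 2)   # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)              # parameter covariance
    return beta[0], beta[1], np.sqrt(cov[0, 0]), np.sqrt(cov[1, 1])
```

Dividing each estimate (minus its null value) by its standard error gives the t statistics and p-values reported in the table.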

3.1.5 Studentized Breusch-Pagan test for heteroskedasticity

The p-value is < 0.05, which is in favor of heteroscedasticity: the standard deviations of the calculated concentrations, as related to the introduced concentrations, are not constant (this should be investigated)

Result of the studentized Breusch-Pagan test
Test statistic df P value
5.05 1 0.02462 *
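The studentized (Koenker) variant of the test regresses the squared residuals on the regressor and uses LM = n·R² as the statistic. A minimal numpy sketch of that idea (illustrative only; the report used the lmtest package in R):

```python
import numpy as np

def koenker_bp(x, resid):
    """Studentized (Koenker) Breusch-Pagan LM statistic.

    Regresses the squared residuals on the single regressor x;
    LM = n * R^2 is compared against a chi-square with 1 df
    (critical value 3.84 at the 5 % level).
    """
    x = np.asarray(x, dtype=float)
    u2 = np.asarray(resid, dtype=float) ** 2
    n = len(u2)
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, u2, rcond=None)
    ss_res = np.sum((u2 - X @ beta) ** 2)
    ss_tot = np.sum((u2 - u2.mean()) ** 2)
    if ss_tot == 0.0:                # perfectly constant squared residuals
        return 0.0
    return n * (1.0 - ss_res / ss_tot)
```

Residuals whose spread grows with concentration, as in this report, push LM above the critical value, while residuals of constant magnitude keep it near zero.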

4 Raw data

User provided

Table 4.1: Calibration standards raw data
ID TYPE SERIE SIGNAL CONC_LEVEL
CAL CAL 1 3057 0.00049
CAL CAL 1 11600 0.00150
CAL CAL 1 155620 0.02000
CAL CAL 1 1556003 0.20000
CAL CAL 1 1720 0.00050
CAL CAL 1 7430 0.00150
CAL CAL 1 145383 0.02000
CAL CAL 1 1467917 0.20000
CAL CAL 2 2144 0.00050
CAL CAL 2 5250 0.00150
CAL CAL 2 142242 0.02000
CAL CAL 2 1425120 0.20000
CAL CAL 2 1847 0.00050
CAL CAL 2 8487 0.00150
CAL CAL 2 137393 0.02000
CAL CAL 2 1435514 0.20000
CAL CAL 3 2744 0.00050
CAL CAL 3 10473 0.00150
CAL CAL 3 141074 0.02000
CAL CAL 3 1400930 0.20000
CAL CAL 3 3422 0.00050
CAL CAL 3 10554 0.00150
CAL CAL 3 115746 0.02000
CAL CAL 3 1432539 0.20000

Figure 4.1: Calibration data

Table 4.2: Validation standards raw data
ID TYPE SERIE SIGNAL CONC_LEVEL
VAL VAL 1 1544 0.0005
VAL VAL 1 9992 0.0015
VAL VAL 1 155683 0.0200
VAL VAL 1 1767753 0.2000
VAL VAL 1 1422 0.0005
VAL VAL 1 11212 0.0015
VAL VAL 1 154877 0.0200
VAL VAL 1 1540288 0.2000
VAL VAL 1 3661 0.0005
VAL VAL 1 11527 0.0015
VAL VAL 1 154062 0.0200
VAL VAL 1 1466979 0.2000
VAL VAL 1 2712 0.0005
VAL VAL 1 6007 0.0015
VAL VAL 1 153763 0.0200
VAL VAL 1 1466715 0.2000
VAL VAL 2 2593 0.0005
VAL VAL 2 10558 0.0015
VAL VAL 2 140350 0.0200
VAL VAL 2 1413294 0.2000
VAL VAL 2 2880 0.0005
VAL VAL 2 9174 0.0015
VAL VAL 2 110853 0.0200
VAL VAL 2 1425878 0.2000
VAL VAL 2 1316 0.0005
VAL VAL 2 9065 0.0015
VAL VAL 2 136750 0.0200
VAL VAL 2 1368431 0.2000
VAL VAL 2 1908 0.0005
VAL VAL 2 9657 0.0015
VAL VAL 2 130685 0.0200
VAL VAL 2 1332873 0.2000
VAL VAL 3 1498 0.0005
VAL VAL 3 7730 0.0015
VAL VAL 3 142770 0.0200
VAL VAL 3 1430404 0.2000
VAL VAL 3 3415 0.0005
VAL VAL 3 10179 0.0015
VAL VAL 3 141127 0.0200
VAL VAL 3 1409384 0.2000
VAL VAL 3 3423 0.0005
VAL VAL 3 10600 0.0015
VAL VAL 3 138551 0.0200
VAL VAL 3 1403500 0.2000
VAL VAL 3 1281 0.0005
VAL VAL 3 6198 0.0015
VAL VAL 3 117689 0.0200
VAL VAL 3 1411380 0.2000

5 R Packages used

Aphalo, Pedro J. 2022a. Ggpmisc: Miscellaneous Extensions to Ggplot2. https://CRAN.R-project.org/package=ggpmisc

Aphalo, Pedro J. 2022b. Ggpp: Grammar Extensions to Ggplot2. https://CRAN.R-project.org/package=ggpp

Dahl, David B., David Scott, Charles Roosen, Arni Magnusson, and Jonathan Swinton. 2019. Xtable: Export Tables to LaTeX or HTML. http://xtable.r-forge.r-project.org/

Daróczi, Gergely, and Roman Tsegelskyi. 2022. Pander: An R Pandoc Writer. https://rapporter.github.io/pander/

Dowle, Matt, and Arun Srinivasan. 2022. Data.table: Extension of ‘Data.frame’. https://CRAN.R-project.org/package=data.table

Fox, John, and Sanford Weisberg. 2019. An R Companion to Applied Regression. Third. Thousand Oaks CA: Sage. https://socialsciences.mcmaster.ca/jfox/Books/Companion/

Fox, John, Sanford Weisberg, and Brad Price. 2022a. Car: Companion to Applied Regression. https://CRAN.R-project.org/package=car

Fox, John, Sanford Weisberg, and Brad Price. 2022b. carData: Companion to Applied Regression Data Sets. https://CRAN.R-project.org/package=carData

Garnier, Simon. 2021. Viridis: Colorblind-Friendly Color Maps for R. https://CRAN.R-project.org/package=viridis

Garnier, Simon. 2022. viridisLite: Colorblind-Friendly Color Maps (Lite Version). https://CRAN.R-project.org/package=viridisLite

Hofner, Benjamin. 2021. papeR: A Toolbox for Writing Pretty Papers and Reports. https://CRAN.R-project.org/package=papeR

Hofner, Benjamin, with contributions by many others. 2021. papeR: A Toolbox for Writing Pretty Papers and Reports. https://github.com/hofnerb/papeR

Hothorn, Torsten, Achim Zeileis, Richard W. Farebrother, and Clint Cummins. 2022. Lmtest: Testing Linear Regression Models. https://CRAN.R-project.org/package=lmtest

R Core Team. 2022. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing. https://www.R-project.org/

Sievert, Carson. 2020. Interactive Web-Based Data Visualization with R, Plotly, and Shiny. Chapman & Hall/CRC. https://plotly-r.com

Sievert, Carson, Chris Parmer, Toby Hocking, Scott Chamberlain, Karthik Ram, Marianne Corvellec, and Pedro Despouy. 2022. Plotly: Create Interactive Web Graphics via Plotly.js. https://CRAN.R-project.org/package=plotly

Wickham, Hadley. 2016. Ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York. https://ggplot2.tidyverse.org

Wickham, Hadley, and Jennifer Bryan. 2022. Readxl: Read Excel Files. https://CRAN.R-project.org/package=readxl

Wickham, Hadley, Winston Chang, Lionel Henry, Thomas Lin Pedersen, Kohske Takahashi, Claus Wilke, Kara Woo, Hiroaki Yutani, and Dewey Dunnington. 2022. Ggplot2: Create Elegant Data Visualisations Using the Grammar of Graphics. https://CRAN.R-project.org/package=ggplot2

Wickham, Hadley, Romain François, Lionel Henry, and Kirill Müller. 2022. Dplyr: A Grammar of Data Manipulation. https://CRAN.R-project.org/package=dplyr

Zeileis, Achim, and Gabor Grothendieck. 2005. “Zoo: S3 Infrastructure for Regular and Irregular Time Series.” Journal of Statistical Software 14 (6): 1–27. https://doi.org/10.18637/jss.v014.i06

Zeileis, Achim, Gabor Grothendieck, and Jeffrey A. Ryan. 2022. Zoo: S3 Infrastructure for Regular and Irregular Time Series (z’s Ordered Observations). https://zoo.R-Forge.R-project.org/

Zeileis, Achim, and Torsten Hothorn. 2002. “Diagnostic Checking in Regression Relationships.” R News 2 (3): 7–10. https://CRAN.R-project.org/doc/Rnews/

Zhu, Hao. 2021. kableExtra: Construct Complex Table with Kable and Pipe Syntax. https://CRAN.R-project.org/package=kableExtra